Search results for all records where Creators/Authors contains: "Cooper, Seth"


  1. Citizen science games must balance task difficulty with player skill to ensure optimal engagement and performance. This issue has previously been addressed via player-level matchmaking, a dynamic difficulty adjustment method in which player and level ratings are used to present the levels best suited to each player's individual ability. However, prior work was done in small, isolated test games and left out techniques that could further improve player performance. We therefore examined the effects of player-level matchmaking in Foldit, a live citizen science game. An experiment with 221 players showed that dynamic matchmaking approaches led to significantly more levels completed, as well as a more challenging highest level completed, compared to random level ordering, though not compared to a static ordering approach. We conclude that player-level matchmaking is worth considering for live citizen science games, potentially paired with other dynamic difficulty adjustment methods; a sketch of the rating mechanics follows below.
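    The rating mechanics behind this kind of matchmaking can be illustrated with a short sketch. The Python below is a minimal, hypothetical version assuming an Elo-style update; the study's actual rating system, constants, and selection rule are not specified here, so every name and parameter is illustrative.

        # Hypothetical Elo-style player-level matchmaking (illustrative only).
        K = 32  # assumed rating step size

        def expected_score(player_rating, level_rating):
            # Logistic Elo model: probability the player completes the level.
            return 1.0 / (1.0 + 10 ** ((level_rating - player_rating) / 400.0))

        def update_ratings(player_rating, level_rating, completed):
            # completed is 1.0 if the player finished the level, else 0.0.
            e = expected_score(player_rating, level_rating)
            delta = K * (completed - e)
            # A level "wins" when a player fails it, so its rating moves oppositely.
            return player_rating + delta, level_rating - delta

        def pick_level(player_rating, level_ratings, played):
            # Dynamic matchmaking: serve the unplayed level whose rating is
            # closest to the player's current rating.
            unplayed = {lid: r for lid, r in level_ratings.items() if lid not in played}
            return min(unplayed, key=lambda lid: abs(unplayed[lid] - player_rating))

    Under a scheme like this, a static approach would fix the level order from pre-computed ratings, while the dynamic variant re-ranks candidates after every attempt.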
  2. Citizen science projects that rely on human computation can solicit volunteers or use paid microwork platforms such as Amazon Mechanical Turk. To better understand these approaches, this paper analyzes crowdsourced image labels from an environmental justice project studying wetland loss off the coast of Louisiana. This retrospective analysis identifies key differences between the two populations: while Mechanical Turk workers are accessible, cost-efficient, and rate more images on average than volunteers, their labels are of lower quality, whereas volunteers achieve high accuracy with comparatively few votes (see the aggregation sketch below). Volunteer organizations can also interface with the educational or outreach goals of an organization in ways that the limited context of microwork prevents.
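    The quality/quantity trade-off described above is typically measured by aggregating redundant votes and scoring the result against gold-standard labels. The sketch below is a generic majority-vote aggregation in Python, assuming a simple dict-based data layout; it is not taken from the paper's analysis code.

        from collections import Counter

        def aggregate(votes_by_image, min_votes=3):
            # Majority vote per image, counting only images with enough votes.
            labels = {}
            for image_id, votes in votes_by_image.items():
                if len(votes) >= min_votes:
                    labels[image_id] = Counter(votes).most_common(1)[0][0]
            return labels

        def accuracy(predicted, gold):
            # Fraction of aggregated labels matching the gold-standard label.
            scored = [i for i in predicted if i in gold]
            if not scored:
                return 0.0
            return sum(predicted[i] == gold[i] for i in scored) / len(scored)

    Comparing accuracy across min_votes thresholds for each population would surface the pattern reported above: volunteers reaching high accuracy with few votes, paid workers needing more redundancy.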
  3. Wetland loss is increasing rapidly, and there are gaps in public awareness of the problem. By crowdsourcing image analysis of wetland morphology, academic and government studies could be supplemented and accelerated while engaging and educating the public. The Land Loss Lookout (LLL) project crowdsourced mapping of wetland morphology associated with wetland loss and restoration. We demonstrate that volunteers can be trained relatively easily online to identify characteristic wetland morphologies, that is, patterns on the landscape that suggest a specific geomorphological process. Results from a case study in coastal Louisiana revealed strong agreement between nonexpert and expert assessments, which matched on classifications between 83% and 94% of the time (a simple agreement computation is sketched below). Participants self-reported increased knowledge of wetland loss after participating in the project. Crowd-identified morphologies are consistent with expectations, although more work is needed to directly compare LLL results with previous studies. This work provides a foundation for using crowd-based wetland loss analysis to increase public awareness of the issue and to contribute to land surveys or train machine learning algorithms.

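    The expert/nonexpert agreement figures above amount to percent agreement between two labelings. Here is a minimal sketch, assuming tile IDs mapped to morphology class labels; the LLL data layout is an assumption, not taken from the project's code.

        def percent_agreement(crowd, expert):
            # crowd, expert: dicts mapping tile_id -> morphology class label.
            shared = [t for t in crowd if t in expert]
            agree = sum(crowd[t] == expert[t] for t in shared)
            return 100.0 * agree / len(shared) if shared else 0.0

        def per_class_agreement(crowd, expert):
            # Agreement computed separately per expert-assigned class; a spread
            # like 83-94% would come from the lowest- and highest-agreement classes.
            return {c: percent_agreement(
                        {t: l for t, l in crowd.items() if expert.get(t) == c},
                        expert)
                    for c in set(expert.values())}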